In recent years, several solutions have been proposed to support people with visual impairments or blindness during road crossing. These solutions focus on computer vision techniques for recognizing pedestrian crosswalks and computing their position relative to the user. This contribution instead addresses a different problem: the design of an auditory interface that can effectively guide the user during road crossing. Two original auditory guiding modes based on data sonification are presented and compared with a guiding mode based on speech messages. Experimental evaluation shows that no single guiding mode is best suited for all test subjects. The average time to align and cross does not differ significantly among the three guiding modes, and test subjects distribute their preferences for the best guiding mode almost uniformly among the three solutions. The experiments also show that decoding the sonified instructions requires more effort than decoding the speech instructions, and that test subjects require frequent `hints' (in the form of speech messages). Despite this, more than two thirds of the test subjects prefer one of the two guiding modes based on sonification. There are two main reasons for this: first, with speech messages it is harder to hear the sounds of the environment, and second, sonified messages convey information about the "quantity" of the expected movement.
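The abstract does not detail how the sonification encodes the "quantity" of the expected movement; the sketch below is only a hedged illustration of one common parameter-mapping approach (angular misalignment mapped to pitch, beep rate, and panning). All function names, parameter ranges, and the mapping itself are assumptions for illustration, not the guiding modes described in the paper.

```python
import math

def sonify_alignment(deviation_deg: float,
                     min_freq: float = 300.0,
                     max_freq: float = 1200.0,
                     max_dev: float = 90.0) -> dict:
    """Illustrative mapping (assumed, not the paper's design): the larger the
    angular misalignment with the crosswalk, the higher the pitch and the
    faster the beep rate, so the sound itself conveys "how much" to rotate.
    The sign of the deviation is rendered as stereo panning (left/right)."""
    magnitude = min(abs(deviation_deg), max_dev) / max_dev  # normalize to 0..1
    return {
        "frequency_hz": min_freq + magnitude * (max_freq - min_freq),
        "beeps_per_second": 1.0 + 7.0 * magnitude,
        "pan": math.copysign(1.0, deviation_deg) if deviation_deg else 0.0,
    }

if __name__ == "__main__":
    # A user far from alignment hears fast, high-pitched beeps;
    # near-perfect alignment yields slow, low-pitched beeps.
    for angle in (45.0, 10.0, 0.0):
        print(f"deviation {angle:>5.1f} deg -> {sonify_alignment(angle)}")
```

In contrast, a speech-based mode would render the same state as a discrete message such as "turn slightly left", which is easier to decode but occupies the speech channel and masks environmental sounds.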